90 research outputs found

    An Agent-Based System with Personalization and Intelligent Assistance Services for Facilitating Knowledge Sharing

    Distributed knowledge within organizations, a lack of understanding of the benefits of knowledge sharing, and technology inadequacies are the main barriers to facilitating knowledge sharing. More user-centered applications, built on personalization and intelligent assistance techniques, have been identified as the next step in knowledge sharing facilitation research. In response to these challenges, this study approaches knowledge sharing facilitation with an agent-based system. Agent technology is a promising solution because it can provide personalization and intelligent assistance, offering users a more human-centered path to participation in knowledge sharing. This thesis focuses on automatic interest identification and knowledge member recommendation in order to reduce users' workload and ease their participation in knowledge sharing. The proposed agent-based system is called KSFaci (Knowledge Sharing Facilitator). KSFaci provides personalization and intelligent assistance by recommending knowledge members according to each user's interest preferences. These timely recommendations point users to sources of help and allow them to interact with one another to share or exchange knowledge. The first agent, the Profiler, monitors user navigational behavior and builds a user profile on the user's behalf. The Recommender agent then determines the user's most preferred interests and matches them against other users with similar interests. The main algorithms used are profile determination and user similarity. The recommendation service relieves users of manually browsing and searching for knowledge reference resources. KSFaci is embedded in a web environment, implemented using Java Servlets, and runs under an Apache server. The performance of KSFaci is evaluated using four-factor evaluation metrics covering user profile preciseness, the recommendation service, the staff directory and the document repository. Several techniques are used, including weighted response analysis, two-point scale and Likert-scale survey analysis, and overlap analysis. User satisfaction results indicate that the agent-based approach, which identifies users' interests and establishes a knowledge network based on those interests, is capable of facilitating knowledge sharing. In conclusion, the knowledge network created through automatic interest identification has become a medium through which users find knowledge sources and subsequently perform knowledge sharing tasks.
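
    The thesis names profile determination and user similarity as its core algorithms, but the abstract does not specify them. The sketch below is a minimal illustration, assuming interest profiles are weight vectors maintained by the Profiler agent and that similarity is cosine-based; the function names, threshold and example profiles are assumptions, not details from the thesis.

```python
from math import sqrt

def cosine_similarity(profile_a, profile_b):
    """Cosine similarity between two interest profiles (dicts: interest -> weight)."""
    shared = set(profile_a) & set(profile_b)
    dot = sum(profile_a[k] * profile_b[k] for k in shared)
    norm_a = sqrt(sum(w * w for w in profile_a.values()))
    norm_b = sqrt(sum(w * w for w in profile_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def recommend_members(user, profiles, threshold=0.5):
    """Return other users whose interest profiles are similar to `user`'s."""
    matches = [
        (other, cosine_similarity(profiles[user], profile))
        for other, profile in profiles.items() if other != user
    ]
    return sorted(
        [(other, score) for other, score in matches if score >= threshold],
        key=lambda pair: pair[1], reverse=True,
    )

# Example: profiles built by the Profiler agent from navigation behavior
profiles = {
    "alice": {"data mining": 0.8, "agents": 0.6},
    "bob":   {"agents": 0.9, "ontology": 0.4},
    "carol": {"networking": 0.7},
}
print(recommend_members("alice", profiles))  # only bob passes the similarity threshold
```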

    Semantic-based medical records retrieval via medical-context aware query expansion and ranking.

    Efficient retrieval of medical records involves contextual understanding of both the query and the record contents. This enhances search effectiveness beyond mere keyword matching and is assisted by analyzing semantic notions, for example through the use of the MeSH thesaurus. The query is annotated and expanded with information drawn from deep medical contextual understanding, because medical records typically contain medical terminologies which may not appear in the user query but are important for accurate search hits. Moreover, these terminologies have synonyms which should be exploited for a richer, expanded query. The main contribution of the paper is a semantic-based retrieval technique that combines context-aware query expansion with a search ranking method. The medical domain is chosen as a proof of concept and a medical record retrieval application was developed. The medical records are obtained from the ImageCLEF 2010 dataset, which also hosts a series of evaluation campaigns such as photo annotation, robot vision and Wikipedia retrieval. This paper addresses the following problems: (i) a semantic-based query expansion technique which increases content awareness, (ii) a MeSH-manipulated indexer which captures medical terminologies and their synonyms, (iii) adoption of extended Boolean matching to measure similarity between the query and documents, and (iv) a ranking method which prioritizes the matched expanded query size. Results were measured using precision, recall and mean average precision (MAP). Compared against other approaches, our method has several achievements: (i) more efficient access to the MeSH thesaurus through the manipulated indexer compared to its original form; (ii) enriching query expansion with synonym terms improves MAP over standard query expansion; and (iii) our comprehensive ranking method achieves high recall. According to the MAP score, we are among the top five run systems submitted to the ImageCLEF 2010 medical task.
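
    A minimal sketch of synonym-based query expansion and a simple similarity score over the expanded terms, in the spirit of the approach described above; the toy synonym table and the fraction-of-terms score are illustrative assumptions, not the paper's MeSH-manipulated indexer or its extended Boolean matching formula.

```python
# Toy synonym table standing in for a MeSH-derived index (illustrative only).
MESH_SYNONYMS = {
    "heart attack": ["myocardial infarction", "mi"],
    "x-ray": ["radiograph", "roentgenogram"],
}

def expand_query(query_terms):
    """Expand query terms with their synonyms."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(MESH_SYNONYMS.get(term, []))
    return expanded

def score(document_terms, expanded_query):
    """Fraction of expanded query terms found in the document
    (a stand-in for the paper's extended Boolean matching)."""
    if not expanded_query:
        return 0.0
    return len(expanded_query & set(document_terms)) / len(expanded_query)

query = ["heart attack", "x-ray"]
doc = ["myocardial", "myocardial infarction", "radiograph", "report"]
expanded = expand_query(query)
print(sorted(expanded))
print(score(doc, expanded))
```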

    Process-oriented ontology building methodology for solving unbalanced competency of the ontology.

    During the last two decades, many ontology building methodologies have been introduced. Since these methodologies are based on a society's lexicon rather than its processes, the purpose, scope and required entities of the ontology cannot be clearly defined, and the resulting ontology is either incompetent or over-competent. This research addresses the problem of unbalanced ontology competency by proposing the Process-Oriented Ontology Building methodology (POOB-Methodology). The proposed method defines and expresses the purpose and scope of the ontology as ontological concepts. Since the method is process-oriented, and every society has a finite number of processes, the boundary of the ontology can be clearly defined, which helps the ontology builder concentrate on the main concepts and obtain the required entities. As a result, the obtained ontology is neither incompetent nor over-competent, but competent enough to solve the society's problem. This research contributes by clearly determining the first step of ontology building and providing better coordination between the ontology builder and domain experts.

    Text fragment extraction using incremental evolving fuzzy grammar fragments learner

    Additional structure within free text can be exploited to help identify matching items and can benefit many intelligent text pattern recognition applications. This paper presents an incremental evolving fuzzy grammar (IEFG) method that learns the underlying text fragment patterns and provides an efficient fuzzy grammar representation exploiting both syntactic and semantic properties. This notion is quantified via (i) fuzzy membership, which measures the degree of membership of a text fragment in a semantic grammar class, (ii) fuzzy grammar similarity, which estimates the similarity between two grammars, and (iii) grammar combination, which combines and generalizes grammars with minimal generalization. Terrorism incident data from the United States Worldwide Incidents Tracking System (WITS) are used in the experiments and presented throughout the paper. A comparison with regular expression methods is made for identifying text fragments representing times. The application of text fragment extraction using IEFG is demonstrated for event type, victim type, dead count and wounded count detection, with WITS XML-tagged data used as the gold standard. Results show the efficiency and practicality of IEFG.
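
    The abstract quantifies IEFG through fuzzy membership, grammar similarity and grammar combination without giving their definitions. Below is a toy illustration of the first notion only, assuming a grammar is a set of token-level patterns and membership is the best fraction of pattern positions matched; the patterns, tokenizer and scoring are illustrative simplifications, not the paper's formal fuzzy grammar.

```python
import re

# A toy "grammar" for time expressions: each pattern is a sequence of
# token-level regexes (an illustrative stand-in for the paper's fuzzy
# grammar representation, not its formal definition).
TIME_GRAMMAR = [
    [r"\d{1,2}", r":", r"\d{2}", r"(am|pm)?"],
    [r"\d{1,2}", r"(am|pm)"],
]

def fuzzy_membership(fragment, grammar):
    """Degree of membership of a text fragment in a grammar class,
    taken as the best fraction of pattern positions matched."""
    tokens = re.findall(r"\d+|[a-z]+|:", fragment.lower())
    best = 0.0
    for pattern in grammar:
        matched = sum(
            1 for pos, tok in zip(pattern, tokens) if re.fullmatch(pos, tok)
        )
        best = max(best, matched / len(pattern))
    return best

print(fuzzy_membership("10:45 am", TIME_GRAMMAR))        # high membership
print(fuzzy_membership("downtown market", TIME_GRAMMAR))  # low membership
```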

    An improved deep learning-based approach for sentiment mining

    Sentiment mining approaches can typically be divided into lexicon-based and machine learning approaches. Recently, an increasing number of approaches combine both to improve on the performance of either used separately. However, such combinations still lack contextual understanding, which led to the introduction of deep learning approaches that allow semantic compositionality over a sentiment treebank. This paper enhances the deep learning approach with a semantic lexicon so that sentiment scores can be computed instead of merely nominal classification. Neutral classification is also improved. Results suggest that the approach outperforms the original.
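
    A rough sketch of the idea of blending a compositional classifier's output with a sentiment lexicon so that a numeric score, and hence a finer neutral decision, can be produced. The lexicon entries, class-to-score mapping, stubbed classifier and blending weight are all assumptions for illustration, not the paper's actual model or resources.

```python
# Illustrative blend of a (stubbed) compositional classifier with a lexicon.
LEXICON = {"excellent": 0.9, "good": 0.5, "poor": -0.6, "terrible": -0.9}

CLASS_TO_SCORE = {
    "very negative": -1.0, "negative": -0.5, "neutral": 0.0,
    "positive": 0.5, "very positive": 1.0,
}

def classifier_stub(sentence):
    """Stand-in for a treebank-trained compositional model."""
    return "positive"

def lexicon_score(sentence):
    """Average polarity of lexicon words found in the sentence."""
    words = sentence.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def blended_score(sentence, alpha=0.6):
    """Weighted blend of the model's class score and the lexicon score;
    values near zero can then be mapped to a neutral class."""
    model = CLASS_TO_SCORE[classifier_stub(sentence)]
    return alpha * model + (1 - alpha) * lexicon_score(sentence)

print(blended_score("the service was good but the food was poor"))
```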

    Linguistic rule-based methods for the extraction of medical summaries to benefit patients progression tracking

    Clinical narratives contain useful information that can complement the patient progress records accumulated throughout the patient's medical treatment. To understand the content of clinical narratives, medical concepts, including events and temporal information, must be extracted. This study addresses the task with a linguistic rule-based approach that combines domain knowledge, extraction modules and a temporal linker component, in contrast to the machine learning foundations adopted by most prior work. The proposed approach is therefore evaluated against a machine learning based approach and a knowledge-intensive approach. Results show its strength despite its different nature.
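
    An illustrative fragment of what a linguistic rule-based extractor with a simple temporal linker might look like; the regular-expression rules and the nearest-date linking strategy are assumptions for demonstration, not the study's actual rule set, domain knowledge or temporal linker component.

```python
import re

# Illustrative patterns; real rule sets would be far richer.
EVENT_PATTERN = re.compile(r"\b(admitted|discharged|diagnosed with \w+|underwent \w+)\b")
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def extract_events_with_dates(narrative):
    """Pair each detected clinical event with the nearest date mention."""
    dates = [(m.start(), m.group()) for m in DATE_PATTERN.finditer(narrative)]
    linked = []
    for m in EVENT_PATTERN.finditer(narrative):
        if dates:
            nearest = min(dates, key=lambda d: abs(d[0] - m.start()))[1]
        else:
            nearest = None
        linked.append((m.group(), nearest))
    return linked

text = "Patient admitted on 03/05/2019 and diagnosed with pneumonia; discharged 10/05/2019."
print(extract_events_with_dates(text))
```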

    Natural language query translation for semantic search

    Querying a semantic knowledge base often requires understanding of the ontology schema and proficiency with the query language. Several approaches exist, but they mainly address the disambiguation problem by executing clarification dialogues. This paper addresses the automatic translation of natural language queries into equivalent SPARQL statements without clarification dialogues. We show that this is achievable by annotating all ontology concepts in the query. Next, the connections between the classes are identified so that shared properties can be loaded and matched against terms in the query. The identified ontology triples are then arranged into a valid SPARQL query according to their relations in the ontology schema. We evaluate our approach, MyAutoSPARQL, on selection-type queries and compare its performance against FREyA, a natural language interface that relies on clarification dialogues. The results show that, despite the absence of clarification dialogues, MyAutoSPARQL performs better than FREyA.
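
    A toy sketch of the final construction step, arranging triple patterns found for annotated query concepts into a SPARQL SELECT statement. The concept-to-triple table, the geo vocabulary and the query template are illustrative assumptions, not MyAutoSPARQL's actual annotation pipeline or ontology.

```python
# Toy mapping from annotated query concepts to triple patterns (illustrative).
CONCEPTS = {
    "river": ("?river", "rdf:type", "geo:River"),
    "flows through": ("?river", "geo:flowsThrough", "?country"),
    "country": ("?country", "rdf:type", "geo:Country"),
}

def build_sparql(annotated_terms, select_var="?river"):
    """Arrange the triple patterns found for each annotated term into a query."""
    triples = [CONCEPTS[t] for t in annotated_terms if t in CONCEPTS]
    body = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    return (
        "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n"
        "PREFIX geo: <http://example.org/geo#>\n"
        f"SELECT {select_var} WHERE {{\n  {body} .\n}}"
    )

# "Which rivers flow through a country?" after concept annotation:
print(build_sparql(["river", "flows through", "country"]))
```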

    Evolving fuzzy grammar for crime texts categorization

    Text mining refers to the activity of identifying useful information in natural language text, and is one of the criteria practiced in automated text categorization. Machine learning (ML) based methods are the popular solution for this problem. However, the resulting models typically provide low expressivity and lack a human-understandable representation. Although highly efficient, ML-based methods are established in a train-test setting; when the existing model is found insufficient, the whole process must be repeated (train, test, retrain), which is typically time consuming. Furthermore, retraining the model is usually not a practical or feasible option when the data change continuously. This paper introduces the evolving fuzzy grammar (EFG) method for crime text categorization. In this method, the learning model is built from a set of selected text fragments which are transformed into their underlying structures, called fuzzy grammars. The fuzzy notion is used because grammar matching, parsing and derivation involve uncertainty. A fuzzy union operator is also used to combine and transform individual text fragment grammars into more general representations of the learned text fragments. The set of learned fuzzy grammars evolves with the patterns seen: the learned model is changed slightly and incrementally as adaptation, without conventional redevelopment. The performance of EFG in crime text categorization is evaluated against expert-tagged real incident summaries and compared against C4.5, support vector machines, naïve Bayes, boosting, and k-nearest neighbour methods. Results show that the EFG algorithm performs close to the other ML methods while being highly interpretable, easily integrated into a more comprehensive grammar system, and faster to adapt than full model retraining.
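
    A minimal sketch of the incremental adaptation idea: a new text fragment is either merged into the most similar existing grammar or starts a new one, so the model evolves without full retraining. Representing a grammar as a set of token sequences and measuring similarity by token overlap are illustrative simplifications, not the paper's fuzzy grammar representation or its fuzzy union operator.

```python
# Incremental grammar learning sketch (illustrative simplification).
def similarity(grammar, tokens):
    """Best token overlap between the fragment and the grammar's sequences."""
    return max(
        (len(set(seq) & set(tokens)) / max(len(set(seq) | set(tokens)), 1)
         for seq in grammar),
        default=0.0,
    )

def learn_incrementally(grammars, fragment, threshold=0.3):
    """Merge the fragment into the most similar grammar, or create a new one."""
    tokens = fragment.lower().split()
    scored = [(similarity(g, tokens), g) for g in grammars]
    best_score, best_grammar = max(scored, default=(0.0, None), key=lambda x: x[0])
    if best_grammar is not None and best_score >= threshold:
        best_grammar.add(tuple(tokens))   # union-style generalization step
    else:
        grammars.append({tuple(tokens)})  # evolve: start a new grammar class
    return grammars

grammars = []
for text in ["armed robbery at bank", "robbery at grocery store", "vehicle theft reported"]:
    grammars = learn_incrementally(grammars, text)
print(len(grammars), "grammar classes learned")  # robbery fragments merge; theft starts a new class
```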

    Semantic shot classification in soccer videos via playfield ratio and object size considerations.

    This paper presents a semantic shot classification algorithm for soccer videos. Each shot within a match video is assigned either a far-view or close-up-view class label. Initially, the playfield region of each frame within a shot is identified through low-level color image processing. An additional property is then considered, namely the size of the largest object overlapping the playfield. Class labels are assigned to each frame based on carefully constructed rule sets, and majority voting is finally performed, with the dominant frame label within each shot used as the final class label. Experiments conducted on six soccer matches with varying camera shooting styles have been very promising, with the additional consideration of the largest object size significantly reducing the number of misclassifications.
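
    A small sketch of the frame-level rules and shot-level majority vote described above; the playfield-ratio and object-size thresholds, and the feature values in the example, are assumptions for illustration rather than values from the paper.

```python
from collections import Counter

def classify_frame(playfield_ratio, largest_object_ratio,
                   field_thresh=0.6, object_thresh=0.15):
    """Label a frame as 'far' or 'close-up' from playfield coverage and the
    relative size of the largest object overlapping the playfield."""
    if playfield_ratio >= field_thresh and largest_object_ratio < object_thresh:
        return "far"
    return "close-up"

def classify_shot(frames):
    """Majority vote over per-frame labels gives the shot label."""
    labels = [classify_frame(pf, obj) for pf, obj in frames]
    return Counter(labels).most_common(1)[0][0]

# (playfield_ratio, largest_object_ratio) per frame of one shot
shot = [(0.75, 0.05), (0.70, 0.08), (0.40, 0.30), (0.72, 0.06)]
print(classify_shot(shot))  # 'far'
```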

    Open science cyber-infrastructure framework for next generation disaster analytics

    The open science movement is gaining popularity due to the stability of data storage and network technologies and the availability of open data portals in many countries. However, case studies focusing on the requirements and design of cyber-infrastructure for open science are limited. This paper reports an assessment of existing infrastructure for disaster information management as an open science activity based on the Sendai Framework. A framework that combines open data quality with next-generation repository system requirements is proposed, based on a case study of flood and forest fire management in Malaysia and Indonesia. The paper fills the gap between open data frameworks and next-generation repository systems, drawing on requirements from a recent international collaboration on climate research.